
    The Complexity of Rationalizing Network Formation

    We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex. We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [14]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem. Finally, and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is the best possible, assuming P ≠ NP. To do this we prove a tight (1/2 + δ)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative. This in turn follows from a tight hardness result for MAX-LIN_(R_+) (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [10], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN_(R_+) may be of independent interest, given the generality of these problems.
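    To make the setting concrete, pairwise stability in the Jackson-Wolinsky connections model can be checked directly once valuations and prices are fixed. The sketch below is purely illustrative and is not an algorithm from the paper: w[i][j] stands for vertex i's valuation of vertex j, delta for the per-hop decay, and cost for a uniform per-endpoint edge price; all of these names and parameter choices are assumptions made for the example.

```python
from itertools import combinations
import math

def shortest_paths(n, edges):
    # BFS from every vertex; distances in hops, math.inf when unreachable.
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    dist = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if dist[s][v] == math.inf:
                        dist[s][v] = dist[s][u] + 1
                        nxt.append(v)
            frontier = nxt
    return dist

def utility(i, n, edges, w, delta, cost):
    # Connections-model payoff: decayed valuations minus prices of incident edges.
    d = shortest_paths(n, edges)
    benefit = sum(delta ** d[i][j] * w[i][j]
                  for j in range(n) if j != i and d[i][j] < math.inf)
    return benefit - cost * sum(1 for e in edges if i in e)

def is_pairwise_stable(n, edges, w, delta=0.5, cost=0.1):
    g = {tuple(sorted(e)) for e in edges}
    u = lambda i, graph: utility(i, n, graph, w, delta, cost)
    for i, j in combinations(range(n), 2):
        e = (i, j)
        if e in g:
            # No endpoint may strictly gain by severing an existing edge.
            if any(u(v, g - {e}) > u(v, g) for v in e):
                return False
        else:
            # No pair may jointly gain by adding a missing edge.
            gi, gj = u(i, g | {e}) - u(i, g), u(j, g | {e}) - u(j, g)
            if (gi > 0 and gj >= 0) or (gj > 0 and gi >= 0):
                return False
    return True

# With these parameters the triangle is pairwise stable (delta - delta^2 > cost).
w = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(is_pairwise_stable(3, {(0, 1), (1, 2), (0, 2)}, w))  # True
```

    The rationalization problems in the abstract invert this check: given snapshots assumed to pass it, recover prices (easy) or valuations (NP-hard).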

    Limited Randomness in Games, and Computational Perspectives in Revealed Preference

    In this dissertation, we explore two particular themes in connection with the study of games and general economic interactions: bounded resources and rationality. The rapidly maturing field of algorithmic game theory concerns itself with the computational limits and effects that arise when agents in such an interaction make choices in their "self-interest." The solution concepts that have been studied in this regard, and on which we focus in this dissertation, assume that agents are capable of randomizing over their set of choices. We posit that agents are randomness-limited in addition to being computationally bounded, and determine how this affects their equilibrium strategies in different scenarios. In particular, we study three interpretations of what it means for agents to be randomness-limited, and offer results on finding (approximately) optimal strategies that are randomness-efficient:
    1. One-shot games with access to the support of the optimal strategies: for this case, our results are obtained by sampling strategies from the optimal support via a random walk on an expander graph.
    2. Multiple-round games where agents have no a priori knowledge of their payoffs: we significantly improve the randomness-efficiency of known online algorithms for such games by utilizing distributions based on almost pairwise independent random variables.
    3. Low-rank games: for games in which agents' payoff matrices have low rank, we devise "fixed-parameter" algorithms that compute strategies yielding approximately optimal payoffs for agents, and that run in time polynomial in the size of the input and the rank of the payoff tensors.
    In regard to rationality, we look at some computational questions in a related line of work known as revealed preference theory, with the purpose of understanding the computational limits of inferring agents' payoffs and motives when they reveal their preferences by way of how they act. We investigate two problem settings as applications of this theory and obtain results about their intractability:
    1. Rationalizability of matchings: we consider the problem of rationalizing a given collection of bipartite matchings and show that it is NP-hard to determine agent preferences for which the matchings would be stable. Further, we show, assuming P ≠ NP, that this problem does not admit polynomial-time approximation schemes under two suitably defined notions of optimization.
    2. Rationalizability of network formation games: here we take up a particular model of connections known as the Jackson-Wolinsky model, in which nodes in a graph have valuations for one another and take these valuations into consideration when they choose to build edges. We show that under a notion of stability known as pairwise stability, the problem of finding valuations that rationalize a collection of networks as pairwise stable is NP-hard. More significantly, we show that this problem is hard even to approximate to within a factor of 1/2, and that this is tight.
    Our results on the hardness and inapproximability of these problems use well-known techniques from complexity theory; in particular, the inapproximability of rationalizing network formation games uses PCPs for the problem of satisfying the optimal number of linear equations in positive integers, building on recent results of Guruswami and Raghavendra.
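    The support-sampling idea in item 1 of the first list above can be made concrete. The sketch below shows only the baseline sparsification of Lipton, Markakis, and Mehta with independent samples; the dissertation's randomness-efficient variant would draw these samples from a walk on an expander graph instead, which is not shown here. Names and parameters are illustrative.

```python
import random
from collections import Counter

def sparsify(p, k, rng=random):
    """Empirical sparsification of a mixed strategy: draw k pure strategies
    i.i.d. from p and play uniformly over the sample. A Chernoff-style
    argument shows k = O(log n / eps^2) samples preserve expected payoffs
    against any opponent strategy to within eps."""
    n = len(p)
    sample = rng.choices(range(n), weights=p, k=k)
    counts = Counter(sample)
    return [counts[i] / k for i in range(n)]

# A mixed strategy over six pure strategies, sparsified to support size <= 4.
p = [0.4, 0.3, 0.1, 0.1, 0.05, 0.05]
print(sparsify(p, k=4))  # e.g. [0.5, 0.25, 0.0, 0.25, 0.0, 0.0]
```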

    On Obtaining Pseudorandomness from Error-Correcting Codes

    Constructing pseudorandom objects based on codes has been the focus of some recent research. These constructions were based on specific algebraic codes and were rather simple in structure: a random index into a codeword was picked and m subsequent symbols were output. In this work, we explore whether the scope of this paradigm of constructions can be extended to larger families of codes. We show that there exist such pseudorandom objects based on cyclic, linear codes that fool linear tests. When restricted to algebraic codes, our techniques yield constructions that fool low-degree tests. Specifically, our results show that Reed-Solomon codes can be used to obtain pseudorandom objects, albeit in a weakened form. To the best of our knowledge, this is the first instance of Reed-Solomon codes being used to this effect. In the process, we also touch upon one of the holy grails of derandomization: it should come as no surprise that pseudorandom objects fooling low-degree tests are directly connected to derandomizing polynomial identity testing. We examine whether our constructions are general enough to answer this important question, and while we come up short in this endeavor, we believe our approach adds a new perspective on the problem and, hopefully, a meaningful opening toward solving it.

    On Obtaining Pseudorandomness from Error-Correcting Codes

    A number of recent results have constructed randomness extractors and pseudorandom generators (PRGs) directly from certain error-correcting codes. The underlying construction in these results amounts to picking a random index into the codeword and outputting m consecutive symbols (the codeword is obtained from the weak random source in the case of extractors, and from a hard function in the case of PRGs). We study this construction applied to general cyclic error-correcting codes, with the goal of understanding what pseudorandom objects it can produce. We show that every cyclic code with sufficient distance yields extractors that fool all linear tests. Further, we show that every polynomial code with sufficient distance yields extractors that fool all low-degree prediction tests. These are the first results that apply to univariate (rather than multivariate) polynomial codes, hinting that Reed-Solomon codes may yield good randomness extractors. Our proof technique gives rise to a systematic way of producing unconditional PRGs against restricted classes of tests. In particular, we obtain PRGs fooling all linear tests (which amounts to a construction of ε-biased spaces), and we obtain PRGs fooling all low-degree prediction tests.
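    The construction described here is simple enough to state in code. The toy instance below, over a Reed-Solomon codeword, is meant only to fix ideas about "pick a random index, output m consecutive symbols"; the field size, the wrap-around indexing, and every name in it are assumptions of the example rather than parameters from the paper.

```python
import random

P = 257  # a small prime field size, chosen only for illustration

def rs_encode(msg, p=P):
    # Reed-Solomon encoding: evaluate the message polynomial at all of GF(p).
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(p)]

def extract(msg, seed_index, m, p=P):
    # The construction under study: a random index into the codeword,
    # then m consecutive symbols (wrapping around, as in the cyclic setting).
    cw = rs_encode(msg, p)
    return [cw[(seed_index + t) % len(cw)] for t in range(m)]

# The weak random source supplies msg; the seed supplies the start index.
print(extract(msg=[3, 1, 4, 1, 5], seed_index=random.randrange(P), m=8))
```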

    Algorithms for Playing Games with Limited Randomness

    We study multiplayer games in which the participants have access to only limited randomness. This constrains both the algorithms used to compute equilibria (they should use little or no randomness) as well as the mixed strategies that the participants are capable of playing (these should be sparse). We frame algorithmic questions that naturally arise in this setting, and resolve several of them. We give deterministic algorithms that can be used to find sparse ε-equilibria in zero-sum and non-zero-sum games, and a randomness-efficient method for playing repeated zero-sum games. These results apply ideas from derandomization (expander walks, and δ-independent sample spaces) to the algorithms of Lipton, Markakis, and Mehta [LMM03], and the online algorithm of Freund and Schapire [FS99]. Subsequently, we consider a large class of games in which sparse equilibria are known to exist (and which are therefore amenable to randomness-limited players), namely games of small rank. We give the first “fixed-parameter” algorithms for obtaining approximate equilibria in these games. For rank-k games, we give a deterministic algorithm to find (k + 1)-sparse ε-equilibria in time polynomial in the input size n and some function f(k, 1/ε). In a similar setting, Kannan and Theobald [KT07] gave an algorithm whose run-time is n^(O(k)). Our algorithm works for a slightly different notion of a game’s “rank,” but is fixed-parameter tractable in the above sense, and extends easily to the multi-player case.
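    For reference, the Freund-Schapire online algorithm mentioned above is multiplicative-weights (Hedge) play. The sketch below shows only that baseline, with fresh randomness every round; the paper's improvement, drawing each round's action from a distribution based on almost pairwise independent random variables, is not reproduced here, and all names are illustrative.

```python
import math
import random

def hedge_play(payoffs_fn, n_actions, rounds, eta=0.1, rng=random):
    # Multiplicative-weights play against an adversary who reveals,
    # after each round, that round's payoff (in [0, 1]) for every action.
    weights = [1.0] * n_actions
    total = 0.0
    for t in range(rounds):
        z = sum(weights)
        probs = [x / z for x in weights]
        action = rng.choices(range(n_actions), weights=probs, k=1)[0]
        gains = payoffs_fn(t)
        total += gains[action]
        # Exponential update: actions that did well gain weight.
        weights = [x * math.exp(eta * g) for x, g in zip(weights, gains)]
    return total / rounds

# Example: three actions whose payoffs drift over time.
avg = hedge_play(lambda t: [0.2, 0.5 + 0.3 * math.sin(t / 10.0), 0.4], 3, 200)
print(f"average payoff: {avg:.3f}")
```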


    Geometrical Analysis of Spatio-temporal Planning Problems

    In this thesis I represent and analyze spatially and temporally constrained multi-agent planning problems using tools from geometry and advanced calculus. The two problems considered are multi-agent rendezvous and dynamic sensor coverage. Together, these problems encompass the cooperation, constraint representation, and task scheduling aspects of multi-agent planning problems. I represent the constraints of the rendezvous problem on the phase space and show that fulfilment of the rendezvous constraints is equivalent to the invariance of certain conical regions. For the dynamic coverage problem, by contrast, the constraints can be adequately represented on the uncertainty space, and sensor motion laws can be obtained by partitioning the uncertainty space and making decisions based on which partition the uncertainty lies in. I examine the convergence behavior of sensor motion under such laws.
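    As a loose illustration of the partition-based decision rule described above (and emphatically not the control law derived in the thesis), one can discretize the uncertainty space into cells, let uncertainty accrue where the sensor is absent, and steer toward whichever cell currently carries the most uncertainty; every name and constant below is an assumption of the toy.

```python
import math

def coverage_step(pos, uncertainty, growth=0.05, sense_radius=1.5, step=1.0):
    # Update the uncertainty map: covered cells collapse, uncovered cells grow.
    rows, cols = len(uncertainty), len(uncertainty[0])
    for r in range(rows):
        for c in range(cols):
            if math.dist(pos, (r, c)) <= sense_radius:
                uncertainty[r][c] = 0.0
            else:
                uncertainty[r][c] += growth
    # Decision rule: move toward the cell of the partition with peak uncertainty.
    target = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda rc: uncertainty[rc[0]][rc[1]])
    d = math.dist(pos, target)
    if d > 0:
        scale = min(step, d) / d  # do not overshoot the target cell
        pos = (pos[0] + scale * (target[0] - pos[0]),
               pos[1] + scale * (target[1] - pos[1]))
    return pos, uncertainty

pos, U = (0.0, 0.0), [[1.0] * 8 for _ in range(8)]
for _ in range(50):
    pos, U = coverage_step(pos, U)
```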